In telecommunications, round-trip delay (RTD) or round-trip time (RTT) is the amount of time it takes for a signal to be sent plus the amount of time it takes for acknowledgement of that signal having been received. This time delay includes the propagation times for the paths between the two communication endpoints. In the context of computer networks, the signal is typically a data packet. RTT is commonly used interchangeably with ping time, which can be determined with the ping command. However, ping time may differ from the RTT experienced by other protocols, since the payload and priority associated with the ICMP messages used by ping may differ from that of other traffic.
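Since ICMP-based ping requires raw-socket privileges, one common way to approximate RTT at the application level is to time a TCP three-way handshake. The following is a minimal sketch; the function name `measure_rtt` and the use of port 80 are illustrative choices, not part of any standard:

```python
import socket
import time

def measure_rtt(host: str, port: int = 80, timeout: float = 2.0) -> float:
    """Approximate the RTT (in seconds) as the time taken for a TCP
    handshake: connect() returns once the SYN/SYN-ACK exchange with
    the remote endpoint has completed."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.monotonic() - start
    return elapsed
```

Note that this measures handshake time rather than ICMP echo time, so it reflects the treatment TCP traffic receives on the path, which is exactly why it can differ from ping's reported value.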
End-to-end delay is the length of time it takes for a signal to travel in one direction and is often approximated as half the RTT.
Networks with both high bandwidth and a high RTT (and thus high bandwidth-delay product) can have large amounts of data in transit at any given time. Such long fat networks require a special protocol design. One example is the TCP window scale option.
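The bandwidth-delay product is simply bandwidth multiplied by RTT, and it shows why long fat networks need window scaling. A small worked example (the link figures are illustrative, not from the text):

```python
def bandwidth_delay_product(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes that can be 'in flight' on a path: bandwidth x RTT,
    divided by 8 to convert bits to bytes."""
    return bandwidth_bps * rtt_s / 8

# A hypothetical 1 Gbit/s link with a 100 ms RTT can hold:
bdp = bandwidth_delay_product(1e9, 0.100)   # 12_500_000 bytes (~12.5 MB)

# Without the TCP window scale option, the receive window is capped at
# 65_535 bytes, which limits throughput to window / RTT:
max_throughput_bps = 65_535 / 0.100 * 8     # ~5.24 Mbit/s on this path
```

The gap between ~12.5 MB of path capacity and a 64 KiB window is why window scaling (or many parallel connections) is needed to fill such a link.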
The RTT was originally estimated in TCP by:

RTT = (α · Old_RTT) + ((1 − α) · New_Round_Trip_Sample)

where α is a constant weighting factor (0 ≤ α < 1). Choosing a value for α close to 1 makes the weighted average immune to changes that last a short time (e.g., a single segment that encounters long delay). Choosing a value for α close to 0 makes the weighted average respond to changes in delay very quickly. This was improved by the Jacobson/Karels algorithm, which takes the standard deviation of the RTT into account as well. Once a new RTT is calculated, it is entered into the equation above to obtain an average RTT for that connection, and the procedure continues for every new calculation.
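The exponentially weighted average above, together with the Jacobson/Karels refinement that also tracks the mean deviation of the samples, can be sketched as follows. The class name and the default gains (α = 0.875, β = 0.75, and the factor of 4 on the deviation) are the conventional textbook values, assumed here rather than taken from the text:

```python
class RttEstimator:
    """Smoothed-RTT estimator: the classic EWMA from the equation above,
    plus the Jacobson/Karels mean-deviation term used to derive a
    retransmission timeout (RTO)."""

    def __init__(self, alpha: float = 0.875, beta: float = 0.75):
        self.alpha = alpha    # weight given to the old average (the α above)
        self.beta = beta      # weight given to the old deviation estimate
        self.srtt = None      # smoothed round-trip time
        self.rttvar = 0.0     # smoothed mean deviation of the samples

    def update(self, sample: float) -> None:
        if self.srtt is None:            # first measurement seeds the average
            self.srtt = sample
            self.rttvar = sample / 2
            return
        # Deviation is measured against the old average, then both
        # estimates are folded in as exponentially weighted averages:
        deviation = abs(sample - self.srtt)
        self.rttvar = self.beta * self.rttvar + (1 - self.beta) * deviation
        self.srtt = self.alpha * self.srtt + (1 - self.alpha) * sample

    def rto(self) -> float:
        # Jacobson/Karels timeout: average plus four mean deviations.
        return self.srtt + 4 * self.rttvar
```

For example, after `update(0.100)` and `update(0.120)` the smoothed RTT moves only slightly (to 0.1025 s), illustrating how an α close to 1 damps out a single slow sample.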